Statistical Regularization and Learning Theory, Lecture 17: Wavelet Approximation

Authors

  • Rob Nowak
  • Changfang Zhu
Abstract

The wavelet expansion of a function f on [0, 1] takes the form $f(t) = C_0 + \sum_{j,k} w_{j,k}\,\psi_{j,k}(t)$, where $C_0 = \int_0^1 f(t)\,dt$, j indexes scale, and k indexes position. Key properties:

1. Vanishing moments: we say $\psi$ has M vanishing moments when $\int_0^1 t^m \psi_{j,k}(t)\,dt = 0$ for $m = 0, 1, \ldots, M-1$. This means the wavelet is "blind" to polynomial segments of degree $\alpha \le M - 1$.

2. Compact support: a Daubechies wavelet with M vanishing moments has support proportional to 2M.

Together, these properties imply that only $O(l \log n)$ nonzero wavelet coefficients are needed to represent a piecewise polynomial function with l pieces and degree $\alpha \le M - 1$ on each piece.
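
To make these two properties concrete, here is a minimal sketch, not from the lecture itself, using NumPy and the PyWavelets package (pywt); the test signal, the choice of 'db3', the decomposition level, and the significance threshold are all illustrative assumptions:

```python
import numpy as np
import pywt

# Piecewise polynomial test signal on [0, 1): l = 3 pieces,
# each of degree alpha <= 2.
n = 1024
t = np.linspace(0, 1, n, endpoint=False)
f = np.where(t < 0.3, 2.0 * t,
    np.where(t < 0.7, 1.0 - (t - 0.5) ** 2, 0.5 * t ** 2 + 0.1))

# 'db3' is a Daubechies wavelet with M = 3 vanishing moments,
# so its details are "blind" to the degree-2 segments of f.
coeffs = pywt.wavedec(f, 'db3', level=6)
flat = np.concatenate(coeffs)

# Apart from the coarse-scale approximation coefficients, only
# details whose support straddles a breakpoint or the boundary
# should be significant -- O(l log n) in total.
significant = np.abs(flat) > 1e-8 * np.abs(flat).max()
print(f"{significant.sum()} of {flat.size} coefficients are significant")

# Numerical check of the vanishing-moment property:
# integral of t^m * psi(t) dt ~ 0 for m = 0, 1, 2.
phi, psi, x = pywt.Wavelet('db3').wavefun(level=10)
dx = x[1] - x[0]
for m in range(3):
    print(f"moment {m}: {np.sum(x ** m * psi) * dx:.2e}")
```

Because the polynomial pieces are exact, interior detail coefficients vanish to machine precision, so the significant ones cluster around the breakpoints and the interval boundary at every scale.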

Similar resources

Mathematisches Forschungsinstitut Oberwolfach Learning Theory and Approximation

Mathematical analysis of learning algorithms consists of bias, measured by various kinds of approximation error, and variance, investigated by probabilistic and statistical analysis. This workshop dealt with new developments and achievements from the past ten years, such as sparsity and dimension reduction for high-dimensional data, kernel learning and approximation by integral operators, or no...

Verification of an Evolutionary-based Wavelet Neural Network Model for Nonlinear Function Approximation

Nonlinear function approximation is one of the most important tasks in system analysis and identification. Several models have been presented to achieve an accurate approximation of nonlinear mathematical functions. However, the majority of these models are specific to certain problems and systems. In this paper, an evolutionary-based wavelet neural network model is proposed for structure definiti...

On Stochastic Optimization and Statistical Learning in Reproducing Kernel Hilbert Spaces by Support Vector Machines (SVM)

The paper studies stochastic optimization problems in Reproducing Kernel Hilbert Spaces (RKHS). The objective function of such problems is a mathematical expectation functional depending on decision rules (or strategies), i.e., on functions of observed random parameters. Feasible rules are restricted to belong to an RKHS. This kind of problem arises in on-line decision making and in statistical ...

Statistical Regularization and Learning Theory, Lecture 12: Complexity Regularization in Regression

Example 1. To illustrate the distinction between classification and regression, consider a simple, scalar signal-plus-noise problem. Consider $Y_i = \theta + W_i$, $i = 1, \ldots, n$, where $\theta$ is a fixed unknown scalar parameter and the $W_i$ are independent, zero-mean, unit-variance random variables. Let $\bar{Y} = \frac{1}{n}\sum_{i=1}^{n} Y_i$. Then, according to the Central Limit Theorem, $\bar{Y}$ is distributed approximately $N(\theta, 1/n)$. ...
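
As a quick numerical illustration of this example (not part of the original lecture; theta = 2.0, n = 100, Gaussian noise, and 10,000 trials are illustrative assumptions), the N(theta, 1/n) approximation can be checked by simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, trials = 2.0, 100, 10_000

# Y_i = theta + W_i with independent, zero-mean, unit-variance noise.
Y = theta + rng.standard_normal((trials, n))
Ybar = Y.mean(axis=1)

# CLT: Ybar should be approximately N(theta, 1/n).
print("mean of Ybar:", Ybar.mean())   # close to theta = 2.0
print("var of Ybar: ", Ybar.var())    # close to 1/n = 0.01
```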

An Equivalence Between Sparse Approximation and Support Vector Machines

This paper shows a relationship between two different approximation techniques: the Support Vector Machines (SVM), proposed by V. Vapnik (1995), and a sparse approximation scheme that resembles the Basis Pursuit De-Noising algorithm (Chen, 1995; Chen, Donoho and Saunders, 1995). SVM is a technique which can be derived from the Structural Risk Minimization Principle (Vapnik, 1982) and can be used...

Publication date: 2004